A Convergence Theorem for Sequential Learning in Two-Layer Perceptrons

Similar articles

A Convergence Theorem for Sequential Learning in Two-Layer Perceptrons

We consider a Perceptron with N_i input units, one output and a yet unspecified number of hidden units. This Perceptron must be able to learn a given but arbitrary set of input-output examples. By sequential learning we mean that groups of patterns, pertaining to the same class, are sequentially separated from the rest by successively adding hidden units until the remaining patterns are all in t...
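As a rough illustration of such a scheme (a hypothetical sketch, not the paper's exact construction; the pocket perceptron here is an assumed stand-in for whatever single-unit rule is used), each new hidden unit below tries to cut a group of same-class patterns off from the rest, separated patterns are removed, and the loop stops once the remaining patterns all share one label:

```python
import numpy as np

def pocket_perceptron(X, y, epochs=200, seed=0):
    """Train one threshold unit on labels y in {-1, +1}; keep the weights
    that classified the most patterns correctly during training."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # absorb the bias
    w = np.zeros(Xb.shape[1])
    best_w, best_score = w.copy(), -1
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if np.sign(Xb[i] @ w) != y[i]:
                w += y[i] * Xb[i]                  # standard perceptron step
        score = np.sum(np.sign(Xb @ w) == y)
        if score > best_score:
            best_w, best_score = w.copy(), score
    return best_w

def sequential_learning(X, y):
    """Add hidden units until the patterns not yet separated all belong
    to a single class (a hedged reading of the scheme described above)."""
    hidden, remaining = [], np.arange(len(X))
    while len(np.unique(y[remaining])) > 1:
        target = y[remaining][0]                   # class to cut off next
        t = np.where(y[remaining] == target, 1, -1)
        w = pocket_perceptron(X[remaining], t)
        Xb = np.hstack([X[remaining], np.ones((len(remaining), 1))])
        fired = np.sign(Xb @ w) == 1
        # accept the unit only if it fires on target-class patterns alone
        if fired.any() and not (fired & (t == -1)).any():
            hidden.append(w)
            remaining = remaining[~fired]          # separated patterns leave
        else:
            break  # the paper's construction guarantees progress; we just stop
    return hidden
```

On a linearly separable toy set the loop typically terminates after a single unit; the point of the theorem is that the construction terminates for an arbitrary consistent training set.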

Convergence in a sequential two stages decision making process

We analyze a sequential decision making process, in which at each step the decision is made in two stages. In the first stage a partially optimal action is chosen, which allows the decision maker to learn how to improve it under the new environment. We show how inertia (the cost of changing) may lead the process to converge to a routine where no further changes are made. We illustrate our scheme with some...
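As a toy illustration of the inertia mechanism (all quantities here are invented for the example, not taken from the paper), the following simulation lets an agent commit to an action, learn a better payoff estimate for it, and switch only when the estimated gain exceeds a fixed cost of changing; the run settles into a routine where the action no longer changes:

```python
import numpy as np

rng = np.random.default_rng(1)
payoff = rng.normal(size=10)     # unknown true payoffs of 10 actions
estimate = np.zeros(10)          # decision maker's running estimates
switch_cost = 0.5                # inertia: the cost of changing action
action, history = 0, []

for step in range(200):
    # stage 1: act with the currently chosen (partially optimal) action
    reward = payoff[action] + rng.normal(scale=0.1)
    # stage 2: learn, i.e. improve the estimate for the tried action
    estimate[action] += 0.2 * (reward - estimate[action])
    best = int(np.argmax(estimate))
    # change only if the estimated gain beats the cost of changing
    if estimate[best] - estimate[action] > switch_cost:
        action = best
    history.append(action)

print("routine reached:", history[-10:])   # the same action, step after step
```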

Convergence of stochastic learning in perceptrons with binary synapses.

The efficacy of a biological synapse is naturally bounded and, at some resolution, it is discrete, at the latest at the level of single vesicles. The finite number of synaptic states dramatically reduces the storage capacity of a network when online learning is considered (i.e., the synapses are immediately modified by each pattern): the trace of old memories decays exponentially with the number of new...
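The exponential forgetting mentioned above is easy to reproduce in a toy model. In the sketch below (the switching probability q and all sizes are assumptions for illustration, not the paper's model), patterns are stored in binary synapses by stochastic switching, and the overlap with one old pattern is tracked while new random patterns are stored on top of it; the excess overlap above chance decays like (1 - q) ** t:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                               # number of binary synapses
q = 0.05                               # per-synapse switching probability

def store(pattern, w):
    """Stochastic online update: every synapse that disagrees with the
    pattern switches to it independently with probability q."""
    w = w.copy()
    flip = (w != pattern) & (rng.random(n) < q)
    w[flip] = pattern[flip]
    return w

tracked = rng.integers(0, 2, n)        # the old memory whose trace we follow
w = tracked.copy()                     # idealize a freshly stored memory
overlaps = []
for _ in range(200):                   # online learning of new random patterns
    w = store(rng.integers(0, 2, n), w)
    overlaps.append(np.mean(w == tracked))

# the overlap relaxes toward chance (0.5) as 0.5 + 0.5 * (1 - q) ** t
print([round(m, 3) for m in overlaps[::50]])
```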

Two-Layer Contractive Encodings with Linear Transformation of Perceptrons for Semi-Supervised Learning

It is difficult to train a multi-layer perceptron (MLP) when only a few labeled samples are available. However, by pretraining an MLP with a vast amount of unlabeled samples, we may achieve better generalization performance. Schulz et al. (2012) showed that it is possible to pretrain an MLP in a less greedy way by utilizing the two-layer contractive encodings, however, with a cost...
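For readers unfamiliar with the contractive part of such encodings, the snippet below computes the usual contractive penalty (Rifai et al., 2011), the squared Frobenius norm of the hidden layer's Jacobian, for a sigmoid layer; the sizes and weights are made up, and this is only one ingredient of the two-layer scheme discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(20, 50))   # hidden-by-input weight matrix
b = np.zeros(20)

def contractive_penalty(x, W, b):
    """Squared Frobenius norm of the Jacobian of h = sigmoid(Wx + b)
    with respect to x; penalizing it makes the encoding contractive."""
    h = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    dh = h * (1.0 - h)                     # elementwise sigmoid derivative
    # J = diag(dh) @ W, hence ||J||_F^2 = sum_j dh_j^2 * ||W_j||^2
    return np.sum(dh**2 * np.sum(W**2, axis=1))

x = rng.normal(size=50)
print(contractive_penalty(x, W, b))
```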

Journal

Journal title: Europhysics Letters (EPL)

Year: 1990

ISSN: 0295-5075, 1286-4854

DOI: 10.1209/0295-5075/11/6/001